
    Oil and Water? High Performance Garbage Collection in Java with MMTk

    Increasingly popular languages such as Java and C# require efficient garbage collection. This paper presents the design, implementation, and evaluation of MMTk, a Memory Management Toolkit for and in Java. MMTk is an efficient, composable, extensible, and portable framework for building garbage collectors. MMTk uses design patterns and compiler cooperation to combine modularity and efficiency. The resulting system is more robust, easier to maintain, and has fewer defects than monolithic collectors. Experimental comparisons with monolithic Java and C implementations reveal that MMTk has significant performance advantages as well. Performance-critical system software typically uses monolithic C at the expense of flexibility. Our results refute the common wisdom that only this approach attains efficiency, and suggest that performance-critical software can embrace modular design and high-level languages.

    Some of the self-employed are still struggling amid uncertainty about whether they could claim grants

    The self-employed have not recovered from the pandemic despite five rounds of government support, find Robert Blackburn (University of Liverpool), Stephen Machin and Maria Ventura (LSE). More than a quarter are in financial trouble, and there has been increasing confusion about their eligibility for grants.

    Down for the Count? Getting Reference Counting Back in the Ring

    Reference counting and tracing are the two fundamental approaches that have underpinned garbage collection since 1960. However, despite some compelling advantages, reference counting is almost completely ignored in implementations of high performance systems today. In this paper we take a detailed look at reference counting to understand its behavior and to improve its performance. We identify key design choices for reference counting and analyze how the behavior of a wide range of benchmarks might affect design decisions. As far as we are aware, this is the first such quantitative study of reference counting. We use insights gleaned from this analysis to introduce a number of optimizations that significantly improve the performance of reference counting. We find that an existing modern implementation of reference counting has an average 30% overhead compared to tracing, and that in combination, our optimizations are able to completely eliminate that overhead. This brings the performance of reference counting on par with that of a well tuned mark-sweep collector. We keep our in-depth analysis of reference counting as general as possible so that it may be useful to other garbage collector implementers. Our finding that reference counting can be made directly competitive with well tuned mark-sweep should shake the community's prejudices about reference counting and perhaps open new opportunities for exploiting reference counting's strengths, such as localization and immediacy of reclamation.
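The abstract contrasts reference counting with tracing and highlights "immediacy of reclamation". As a point of reference, the following is a minimal toy sketch of naive reference counting, the unoptimized baseline such papers start from; the names (`Obj`, `write_ref`, `dec`, `heap_live`) are illustrative and not the paper's API.

```python
# Toy model of naive reference counting: every pointer store runs a
# write barrier that increments the new target's count and decrements
# the old target's; an object is freed the moment its count hits zero.

class Obj:
    def __init__(self):
        self.rc = 0        # reference count
        self.fields = []   # outgoing references

heap_live = set()          # stand-in for the allocated heap

def new_obj():
    o = Obj()
    heap_live.add(o)
    return o

def write_ref(src, idx, tgt):
    # Write barrier on every pointer store.
    old = src.fields[idx] if idx < len(src.fields) else None
    if tgt is not None:
        tgt.rc += 1
    if idx < len(src.fields):
        src.fields[idx] = tgt
    else:
        src.fields.append(tgt)
    if old is not None:
        dec(old)

def dec(o):
    o.rc -= 1
    if o.rc == 0:
        # Immediacy of reclamation: free as soon as the count reaches
        # zero, recursively decrementing children.
        heap_live.discard(o)
        for child in o.fields:
            if child is not None:
                dec(child)
```

The per-store barrier cost is exactly the overhead the paper's optimizations (such as deferral and coalescing) attack; cyclic garbage, which this sketch cannot reclaim, is reference counting's other classic weakness.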

    Beltway: Getting Around Garbage Collection Gridlock

    We present the design and implementation of a new garbage collection framework that significantly generalizes existing copying collectors. The Beltway framework exploits and separates object age and incrementality. It groups objects in one or more increments on queues called belts, collects belts independently, and collects increments on a belt in first-in-first-out order. We show that Beltway configurations, selected by command line options, act and perform the same as semi-space, generational, and older-first collectors, and encompass all previous copying collectors of which we are aware. The increasing reliance on garbage collected languages such as Java requires that the collector perform well. We show that the generality of Beltway enables us to design and implement new collectors that are robust to variations in heap size and improve total execution time over the best generational copying collectors of which we are aware by up to 40%, and on average by 5 to 10%, for small to moderate heap sizes. New garbage collection algorithms are rare, and yet we define not just one, but a new family of collectors that subsumes previous work. This generality enables us to explore a larger design space and build better collectors.
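The organizing structure the abstract describes, belts as queues of increments, with increments on a belt collected first-in-first-out, can be sketched as follows. This is our illustrative rendering, not the paper's code; the names `Belt` and `collect_increment` are invented, and liveness is passed in as a set rather than discovered by tracing.

```python
from collections import deque

class Belt:
    """A belt is a FIFO queue of increments (groups of objects)."""
    def __init__(self):
        self.increments = deque()

    def allocate(self, obj):
        if not self.increments:
            self.increments.append([])
        self.increments[-1].append(obj)

    def new_increment(self):
        self.increments.append([])

def collect_increment(belt, live, promote_to):
    # Collect the oldest increment on this belt (first-in-first-out);
    # survivors are copied into an increment on the target belt.
    inc = belt.increments.popleft()
    for obj in inc:
        if obj in live:
            promote_to.allocate(obj)
```

Different belt configurations then recover familiar collectors: two belts with promotion from the first to the second behaves generationally, while a single belt collected whole resembles semi-space.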

    Distilling the Real Cost of Production Garbage Collectors

    Abridged abstract: despite the long history of garbage collection (GC) and its prevalence in modern programming languages, there is surprisingly little clarity about its true cost. Without understanding their cost, crucial tradeoffs made by garbage collectors (GCs) go unnoticed. This can lead to misguided design constraints and evaluation criteria used by GC researchers and users, hindering the development of high-performance, low-cost GCs. In this paper, we develop a methodology that allows us to empirically estimate the cost of GC for any given set of metrics. By distilling out the explicitly identifiable GC cost, we estimate the intrinsic application execution cost using different GCs. The minimum distilled cost forms a baseline. Subtracting this baseline from the total execution costs, we can then place an empirical lower bound on the absolute costs of different GCs. Using this methodology, we study five production GCs in OpenJDK 17, a high-performance Java runtime. We measure the cost of these collectors and expose their respective key performance tradeoffs. We find that with a modestly sized heap, production GCs incur substantial overheads across a diverse suite of modern benchmarks, spending at least 7-82% more wall-clock time and 6-92% more CPU cycles relative to the baseline cost. We show that these costs can be masked by concurrency and generous provisioning of memory/compute. In addition, we find that newer low-pause GCs are significantly more expensive than older GCs, and, surprisingly, sometimes deliver worse application latency than stop-the-world GCs. Our findings reaffirm that GC is by no means a solved problem and that a low-cost, low-latency GC remains elusive. We recommend adopting the distillation methodology together with a wider range of cost metrics for future GC evaluations.
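The distillation arithmetic the abstract walks through (distill out identifiable GC cost, take the minimum as the baseline, subtract it from each total) is simple enough to state directly. The function below is our rendering of that methodology in miniature; the collector names and numbers are invented for illustration, not the paper's measurements.

```python
def distill(costs):
    """costs maps collector name -> (total_cost, identifiable_gc_cost).

    Returns an empirical lower bound on each collector's absolute cost:
    total cost minus the minimum distilled (application-only) cost.
    """
    # Distilled application-only cost under each collector.
    distilled = {gc: total - gc_cost
                 for gc, (total, gc_cost) in costs.items()}
    baseline = min(distilled.values())   # minimum distilled cost
    return {gc: total - baseline
            for gc, (total, _) in costs.items()}

# Invented example: total wall-clock seconds and explicit GC seconds.
bounds = distill({
    "Serial": (110.0, 18.0),
    "G1":     (105.0, 12.0),
    "ZGC":    (120.0,  4.0),
})
# bounds == {'Serial': 18.0, 'G1': 13.0, 'ZGC': 28.0}
```

Note how a concurrent collector with little explicit GC time (the "ZGC" row here) can still carry a large lower bound, because its intrinsic slowdown of the application shows up in the total; this is exactly the hidden cost the methodology is designed to expose.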

    Unconventional superconductivity in the nickel-chalcogenide superconductor, TlNi2Se2

    We present the results of a study of the vortex lattice (VL) of the nickel chalcogenide superconductor TlNi2Se2, using small angle neutron scattering. This superconductor has the same crystal symmetry as the iron arsenide materials. Previous work points to it being a two-gap superconductor, with an unknown pairing mechanism. No structural transitions in the vortex lattice are seen in the phase diagram, arguing against d-wave gap symmetry. Empirical fits of the temperature-dependence of the form factor and penetration depth rule out a simple s-wave model, supporting the presence of nodes in the gap function. The variation of the VL opening angle with field is consistent with earlier reports of multiple gaps.

    Draining the Swamp: Micro Virtual Machines as Solid Foundation for Language Development

    Many of today's programming languages are broken. Poor performance, lack of features and hard-to-reason-about semantics can cost dearly in software maintenance and inefficient execution. The problem is only getting worse with programming languages proliferating and hardware becoming more complicated. An important reason for this brokenness is that much of language design is implementation-driven. The difficulties in implementation and insufficient understanding of concepts bake bad designs into the language itself. Concurrency, architectural details and garbage collection are three fundamental concerns that contribute much to the complexities of implementing managed languages. We propose the micro virtual machine, a thin abstraction designed specifically to relieve implementers of managed languages of the most fundamental implementation challenges that currently impede good design. The micro virtual machine targets abstractions over memory (garbage collection), architecture (compiler backend), and concurrency. We motivate the micro virtual machine and give an account of the design and initial experience of a concrete instance, which we call Mu, built over a two year period. Our goal is to remove an important barrier to performant and semantically sound managed language design and implementation.

    Employee reactions to talent pool membership

    Purpose – Despite a large literature on talent management, there is very little research comparing the attitudes of employees in talent pools with those not in talent pools. This is an important omission, as employee reactions should influence how effective talent programmes are and how they can be designed and evaluated. Consequently, the purpose of this paper is to explore the work-related attitudes of employees who are members and non-members of talent pools.
    Design/methodology/approach – Matched samples of employees working in a single public sector, scientific organization were surveyed using a standard survey and open questioning to elicit and compare the voices of included and excluded employees.
    Findings – Employees in talent pools were more positive about their future prospects than employees outside talent pools, who reported lower perceived support from the organization, stronger feelings of unfairness and lower expectations of the organization's interest in them.
    Research limitations/implications – More matched-sample studies are necessary to further understand how employee reactions to talent pool membership are mediated by context.
    Practical implications – Organizations should consider how employees will react to the design and implementation of talent pools and try to alleviate any adverse reactions. Two threats in particular are the depression of affect among excluded employees and failure to sustain positive affect among the included employees.
    Originality/value – This is one of very few studies to explore employee reactions to talent programmes in a single organization. The single-site design controls for a large number of variables that confound inter-organizational studies of talent pool membership.